181 research outputs found

    The stepped wedge trial design: a systematic review

    BACKGROUND: Stepped wedge randomised trial designs involve sequential roll-out of an intervention to participants (individuals or clusters) over a number of time periods. By the end of the study, all participants will have received the intervention, although the order in which participants receive it is determined at random. The design is particularly relevant where it is predicted that the intervention will do more good than harm (making a parallel design, in which certain participants do not receive the intervention, unethical) and/or where, for logistical, practical or financial reasons, it is impossible to deliver the intervention simultaneously to all participants. Stepped wedge designs offer a number of opportunities for data analysis, particularly for modelling the effect of time on the effectiveness of an intervention. This paper presents a review of 12 studies (or protocols) that use (or plan to use) a stepped wedge design. One aim of the review is to highlight the potential of the stepped wedge design, given its infrequent use to date. METHODS: Comprehensive literature review of studies or protocols using a stepped wedge design. Data were extracted from the studies in three categories for subsequent consideration: study information (epidemiology, intervention, number of participants), reasons for using a stepped wedge design, and methods of data analysis. RESULTS: The 12 studies included in this review describe evaluations of a wide range of interventions, across different diseases in different settings. However, the stepped wedge design appears to have found a niche for evaluating interventions in developing countries, specifically those concerned with HIV. There were few consistent motivations for employing a stepped wedge design or methods of data analysis across studies. The methodological descriptions of stepped wedge studies, including methods of randomisation, sample size calculations and methods of analysis, are not always complete. CONCLUSION: While the stepped wedge design offers a number of opportunities for use in future evaluations, a more consistent approach to reporting and data analysis is required.
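
    The mechanics of the design are easiest to see as an exposure matrix. The following minimal Python sketch (illustrative only, not taken from any of the reviewed studies) builds such a matrix for a cluster-randomised stepped wedge, assuming equal numbers of clusters cross over at each step:

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=42)

    def stepped_wedge_design(n_clusters: int, n_steps: int) -> np.ndarray:
        """Exposure matrix for a stepped wedge design: rows are clusters,
        columns are time periods; 0 = control, 1 = intervention."""
        assert n_clusters % n_steps == 0, "assumes equal clusters per step"
        # One extra period so every cluster contributes baseline (control) data.
        periods = np.arange(n_steps + 1)
        # Assign clusters evenly to crossover steps, then randomise the order:
        # this shuffle is the only randomisation in the design.
        crossover = np.repeat(np.arange(1, n_steps + 1), n_clusters // n_steps)
        rng.shuffle(crossover)
        # A cluster is exposed in every period at or after its crossover step.
        return (periods[None, :] >= crossover[:, None]).astype(int)

    print(stepped_wedge_design(n_clusters=6, n_steps=3))
    ```

    Each row is a cluster and each column a time period; randomisation enters only through the shuffled order in which clusters cross over, which is what distinguishes the design from a simple phased roll-out.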

    Symptom Remission and Brain Cortical Networks at First Clinical Presentation of Psychosis: The OPTiMiSE Study

    Individuals with psychoses have brain alterations, particularly in frontal and temporal cortices, that may already be prominent at illness onset in those more likely to have poorer symptom remission following treatment with the first antipsychotic. The identification of strong neuroanatomical markers of symptom remission could thus facilitate stratification and individualized treatment of patients with schizophrenia. We used magnetic resonance imaging at baseline to examine brain regional and network correlates of subsequent symptomatic remission in 167 medication-naïve or minimally treated patients with first-episode schizophrenia, schizophreniform disorder, or schizoaffective disorder entering a three-phase trial at seven sites. Patients in remission at the end of each phase were randomized to treatment as usual, with or without an adjunctive psychosocial intervention for medication adherence. The final follow-up visit was at 74 weeks. A total of 108 patients (70%) were in remission at Week 4, 85 (55%) at Week 22, and 97 (63%) at Week 74. We found no baseline regional differences in volumes, cortical thickness, surface area, or local gyrification between patients who did and did not achieve remission at any time point. However, patients not in remission at Week 74 showed, at baseline, reduced structural connectivity across frontal, anterior cingulate, and insular cortices. A similar pattern was evident in patients not in remission at Week 4 and Week 22, although it did not reach significance. Lack of symptom remission in first-episode psychosis is therefore not associated with regional brain alterations at illness onset. Instead, as the illness becomes a stable entity, its association with the altered organization of cortical gyrification becomes more defined.
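
    The abstract does not spell out how the gyrification-based structural connectivity was derived; one common approach is a structural covariance network, in which a regional measure is correlated across the subjects of a group. A minimal sketch with simulated data follows (the region count, group sizes, and contrast are illustrative assumptions, not the study's analysis):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative data: a local gyrification index for 68 cortical regions
    # in two outcome groups (values simulated; not the OPTiMiSE data).
    n_regions = 68
    remitters = rng.normal(3.0, 0.3, size=(97, n_regions))
    non_remitters = rng.normal(3.0, 0.3, size=(70, n_regions))

    def covariance_network(features: np.ndarray) -> np.ndarray:
        """Structural covariance network: pairwise Pearson correlation of a
        regional measure (here gyrification) across subjects in a group."""
        return np.corrcoef(features, rowvar=False)

    net_r = covariance_network(remitters)
    net_nr = covariance_network(non_remitters)
    # One simple group contrast: mean off-diagonal connectivity strength.
    mask = ~np.eye(n_regions, dtype=bool)
    print(f"mean connectivity, remitters:     {net_r[mask].mean():.3f}")
    print(f"mean connectivity, non-remitters: {net_nr[mask].mean():.3f}")
    ```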

    Neurodevelopmental milestones and associated behaviours are similar among healthy children across diverse geographical locations.

    It is unclear whether early child development is, like skeletal growth, similar across diverse regions with adequate health and nutrition. We prospectively assessed 1307 healthy, well-nourished 2-year-old children of educated mothers, enrolled in early pregnancy from urban areas without major socioeconomic or environmental constraints, in Brazil, India, Italy, Kenya and the UK. We used a specially developed psychometric tool, WHO motor milestones and visual tests. Similarities across sites were measured using variance components analysis and standardised site differences (SSD). In 14 of the 16 domains, the percentage of total variance explained by between-site differences ranged from 1.3% (cognitive score) to 9.2% (behaviour score). Of the 80 SSD comparisons, only six were >±0.50 units of the pooled SD for the corresponding item. The sequence and timing of attainment of neurodevelopmental milestones and associated behaviours in early childhood are therefore likely to be innate and universal, as long as nutritional and health needs are met.
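
    The variance components analysis reported here can be reproduced in outline with a random-intercept model: the share of total variance explained by between-site differences is the variance of the site intercepts divided by the total variance. A minimal sketch on simulated data follows (the site effects and scores are invented, and statsmodels is an assumed tool; this is not the study's analysis code):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Simulated scores for 5 sites: small between-site spread relative to
    # the within-site (child-to-child) spread, as the study reports.
    site_names = ["Brazil", "India", "Italy", "Kenya", "UK"]
    sites = np.repeat(site_names, 260)
    site_effect = dict(zip(site_names, rng.normal(0, 0.3, 5)))
    scores = np.array([site_effect[s] for s in sites]) + rng.normal(0, 1, len(sites))
    df = pd.DataFrame({"site": sites, "score": scores})

    # Random-intercept model: score = grand mean + site effect + child residual.
    fit = smf.mixedlm("score ~ 1", df, groups=df["site"]).fit()
    between = fit.cov_re.iloc[0, 0]   # variance of the site random intercepts
    within = fit.scale                # residual (within-site) variance
    print(f"share of variance between sites: {between / (between + within):.1%}")
    ```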

    Prospective, randomized, double-blind, multi-center, Phase III clinical study on transarterial chemoembolization (TACE) combined with Sorafenib® versus TACE plus placebo in patients with hepatocellular cancer before liver transplantation – HeiLivCa [ISRCTN24081794]

    BACKGROUND: Disease progression of hepatocellular cancer (HCC) in patients eligible for liver transplantation (LTx) occurs in up to 50% of patients, resulting in withdrawal from the LTx waiting list. Transarterial chemoembolization (TACE) is used as bridging therapy, with highly variable response rates. The oral multikinase inhibitor sorafenib significantly increases overall survival and time-to-progression in patients with advanced hepatocellular cancer. DESIGN: The HeiLivCa study is a double-blind, controlled, prospective, randomized, multi-centre phase III trial. Patients in study arm A will be treated with transarterial chemoembolization plus sorafenib 400 mg bid; patients in study arm B will be treated with transarterial chemoembolization plus placebo. A total of 208 patients with hepatocellular carcinoma confirmed histologically or diagnosed according to EASL criteria will be enrolled. An interim analysis will be performed after 60 events. Evaluation of the primary endpoint, time-to-progression (TTP), will be performed at 120 events. Secondary endpoints are the number of patients reaching LTx, disease control rates, overall survival, progression-free survival, quality of life, toxicity and safety. DISCUSSION: As TACE is the most widely used primary treatment of HCC before LTx, and sorafenib is the only systemic treatment of proven efficacy in advanced HCC, there is a strong rationale for combining both treatment modalities. This study is designed to reveal the potential superiority of combined TACE plus sorafenib over TACE alone and to explore a new neo-adjuvant treatment concept in HCC before LTx.
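
    For a sense of how an event-driven time-to-progression comparison of this kind works, the sketch below runs a log-rank test on simulated TTP data for the two arms (the distribution scales, censoring time, and use of the lifelines package are illustrative assumptions; the protocol does not specify the analysis code):

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)

    # Illustrative time-to-progression data in months for the two arms;
    # the trial's actual analysis is event-driven, triggered at 120 events.
    n_per_arm = 104                                       # 208 patients, 1:1
    ttp_a = rng.exponential(scale=14.0, size=n_per_arm)   # TACE + sorafenib
    ttp_b = rng.exponential(scale=10.0, size=n_per_arm)   # TACE + placebo
    # Administrative censoring at 24 months of follow-up.
    observed_a, observed_b = ttp_a < 24, ttp_b < 24
    ttp_a, ttp_b = np.minimum(ttp_a, 24), np.minimum(ttp_b, 24)

    result = logrank_test(ttp_a, ttp_b,
                          event_observed_A=observed_a,
                          event_observed_B=observed_b)
    print(f"log-rank p-value: {result.p_value:.3f}")
    ```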

    Metformin: historical overview

    Metformin (dimethylbiguanide) has become the preferred first-line oral blood glucose-lowering agent to manage type 2 diabetes. Its history is linked to Galega officinalis (also known as goat's rue), a traditional herbal medicine in Europe, found to be rich in guanidine, which, in 1918, was shown to lower blood glucose. Guanidine derivatives, including metformin, were synthesised and some (not metformin) were used to treat diabetes in the 1920s and 1930s but were discontinued due to toxicity and the increased availability of insulin. Metformin was rediscovered in the search for antimalarial agents in the 1940s and, during clinical tests, proved useful to treat influenza, when it sometimes lowered blood glucose. This property was pursued by the French physician Jean Sterne, who first reported the use of metformin to treat diabetes in 1957. However, metformin received limited attention as it was less potent than other glucose-lowering biguanides (phenformin and buformin), which were generally discontinued in the late 1970s due to a high risk of lactic acidosis. Metformin's future was precarious, its reputation tarnished by association with other biguanides despite evident differences. The ability of metformin to counter insulin resistance and address adult-onset hyperglycaemia without weight gain or increased risk of hypoglycaemia gradually gathered credence in Europe, and after intensive scrutiny metformin was introduced into the USA in 1995. Long-term cardiovascular benefits of metformin were identified by the UK Prospective Diabetes Study (UKPDS) in 1998, providing a new rationale to adopt metformin as initial therapy to manage hyperglycaemia in type 2 diabetes. Sixty years after its introduction in diabetes treatment, metformin has become the most prescribed glucose-lowering medicine worldwide, with the potential for further therapeutic applications.

    An initial application of computerized adaptive testing (CAT) for measuring disability in patients with low back pain

    BACKGROUND: Recent approaches to outcome measurement involving Computerized Adaptive Testing (CAT) offer a way of measuring disability in low back pain (LBP) that can reduce the burden upon patient and professional. The aim of this study was to explore the potential of CAT in LBP for measuring disability as defined in the International Classification of Functioning, Disability and Health (ICF), which includes impairments, activity limitation, and participation restriction. METHODS: A total of 266 patients with low back pain answered questions from a range of widely used questionnaires. An exploratory factor analysis (EFA) was used to identify disability dimensions, which were then subjected to Rasch analysis. Reliability was tested by internal consistency and the person separation index (PSI). Discriminant validity of disability levels was evaluated by the Spearman correlation coefficient (r), the intraclass correlation coefficient [ICC(2,1)] and the Bland-Altman approach. A CAT was developed for each dimension, and the results were checked against simulated and real applications from a further 133 patients. RESULTS: Factor analytic techniques identified two dimensions, named "body functions" and "activity-participation". After deletion of some items for failure to fit the Rasch model, the remaining items were mostly free of Differential Item Functioning (DIF) for age and gender. Reliability exceeded 0.90 for both dimensions. The disability levels generated using all items and those obtained from the real CAT application were highly correlated (> 0.97 for both dimensions). On average, 19 and 14 items were needed to estimate precise disability levels using the initial CAT for the first and second dimensions, respectively. However, tolerating a marginal increase in the standard error of the estimate substantially reduced the number of items required. CONCLUSION: Using a combined approach of EFA and Rasch analysis, this study has shown that it is possible to calibrate items onto a single metric in a way that can provide the basis of a CAT application. There is thus an opportunity to obtain a wide variety of information with which to evaluate the biopsychosocial model in its more complex forms, without necessarily increasing the burden of information collection for patients.
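
    The core of a CAT application of the kind described is a short loop: estimate the person's level from the responses so far, administer the unused item that is most informative at that estimate, and stop once the standard error falls below a threshold (or an item cap is hit). A minimal Rasch-based sketch with simulated responses follows (the item bank, stopping values, and response model are illustrative assumptions, not the study's calibrated items):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def rasch_prob(theta, b):
        """Rasch model: P(endorse item | disability theta, item difficulty b)."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def ml_theta(b, x, n_iter=25):
        """Newton-Raphson ML estimate of theta and its standard error,
        bounded to [-4, 4] so all-0/all-1 response patterns stay finite."""
        theta = 0.0
        for _ in range(n_iter):
            p = rasch_prob(theta, b)
            info = np.sum(p * (1.0 - p))          # Fisher information
            theta = float(np.clip(theta + np.sum(x - p) / info, -4.0, 4.0))
        return theta, 1.0 / np.sqrt(info)

    def run_cat(b, true_theta, se_stop=0.4, max_items=20):
        """Administer items adaptively until the SE of the disability
        estimate falls below se_stop (or max_items is reached)."""
        used, x, theta = [], [], 0.0
        while True:
            # Under the Rasch model, the most informative unused item is
            # the one whose difficulty lies closest to the current estimate.
            nxt = min((i for i in range(len(b)) if i not in used),
                      key=lambda i: abs(b[i] - theta))
            used.append(nxt)
            # Simulate the patient's response to the selected item.
            x.append(float(rng.random() < rasch_prob(true_theta, b[nxt])))
            theta, se = ml_theta(b[used], np.asarray(x))
            if se < se_stop or len(used) >= max_items:
                return theta, se, len(used)

    item_bank = np.linspace(-3, 3, 40)            # calibrated item difficulties
    theta_hat, se, n = run_cat(item_bank, true_theta=1.0)
    print(f"estimated level {theta_hat:.2f} (SE {se:.2f}) after {n} items")
    ```

    Relaxing se_stop is the "marginal increase in the standard error" trade-off the abstract describes: a slightly looser precision target ends the loop after noticeably fewer items.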

    Routine Outcomes Monitoring to Support Improving Care for Schizophrenia: Report from the VA Mental Health QUERI

    In schizophrenia, treatments that improve outcomes have not been reliably disseminated. A major barrier to improving care has been the lack of routinely collected outcomes data that identify patients who are failing to improve or are not receiving effective treatments. To support high-quality care, the VA Mental Health QUERI used literature review, expert interviews, and a national panel process to increase consensus regarding outcomes monitoring instruments and strategies that support quality improvement. There was very good consensus in the domains of psychotic symptoms, side-effects, drug and alcohol use, depression, caregivers, vocational functioning, and community tenure. Validated instruments and assessment strategies exist that are feasible for quality improvement in routine practice.